Stability of Simultaneous Recurrent Neural Network Dynamics for Static Optimization
Abstract
A new trainable, recurrent neural optimization algorithm, which potentially outperforms existing neural search algorithms in computing high-quality solutions of static optimization problems in a computationally efficient manner, is studied. Specifically, a local stability analysis of the dynamics of a relaxation-based recurrent neural network, the Simultaneous Recurrent Neural network, for static optimization problems is presented. The results of the theoretical analysis and the accompanying simulation study lead to the conjecture that the Simultaneous Recurrent Neural network dynamics exhibit desirable stability characteristics: the dynamics typically converge to fixed points at the conclusion of a relaxation cycle, which facilitates adaptation of the weights through any of several fixed-point training algorithms. The trainability of this neural algorithm allows relatively high-quality solutions to be computed for large-scale problem instances with computational efficiency, particularly when compared to solutions computed by the Hopfield network and its derivative algorithms, including those with stochastic search control mechanisms.
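To make the relaxation cycle and the fixed-point behavior described above concrete, the following Python/NumPy fragment is a minimal sketch of one SRN-style relaxation and of a local stability check at the resulting fixed point. It assumes a single-layer logistic recurrent map; the function and variable names (relax_srn, is_locally_stable, W_y, W_x, gain) are illustrative and not taken from the paper.

```python
import numpy as np

# Minimal sketch (assumed form): one relaxation cycle of a recurrent map
#   y <- sigmoid(gain * (W_y @ y + W_x @ x + b))
# iterated until the state stops changing, i.e. a fixed point is reached.
def relax_srn(W_y, W_x, b, x, gain=1.0, tol=1e-6, max_iter=500):
    y = np.zeros(W_y.shape[0])            # initial state of the recurrent nodes
    for _ in range(max_iter):
        y_next = 1.0 / (1.0 + np.exp(-gain * (W_y @ y + W_x @ x + b)))
        if np.max(np.abs(y_next - y)) < tol:   # relaxation cycle has settled
            return y_next, True
        y = y_next
    return y, False                        # relaxation did not converge

# Local stability check at a fixed point y_star of the map above:
# all eigenvalues of the Jacobian must lie strictly inside the unit circle.
def is_locally_stable(W_y, y_star, gain=1.0):
    s = gain * y_star * (1.0 - y_star)     # derivative of the logistic node at y_star
    J = np.diag(s) @ W_y                   # Jacobian of the relaxation map at y_star
    return np.max(np.abs(np.linalg.eigvals(J))) < 1.0
```

Under these assumptions, once a relaxation cycle settles, only the converged state is needed by a fixed-point training algorithm to adapt W_y, W_x, and b, which is the property the abstract highlights.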
Similar Papers
Theoretical Exploration on Local Stability of Simultaneous Recurrent Neural Network Dynamics for Static Combinatorial Optimization
Abstract— This paper presents a theoretical local stability analysis of the Simultaneous Recurrent Neural network (SRN) as a nonlinear dynamic system operating in relaxation mode for static combinatorial optimization. Specifically, stability of hypercube corners of the SRN dynamics, which are equilibrium points for high-gain node dynamics and useful entities to represent solutions of combinator...
An efficient one-layer recurrent neural network for solving a class of nonsmooth optimization problems
Constrained optimization problems have a wide range of applications in science, economics, and engineering. In this paper, a neural network model is proposed to solve a class of nonsmooth constrained optimization problems with a nonsmooth convex objective function subject to nonlinear inequality and affine equality constraints. It is a one-layer non-penalty recurrent neural network based on the...
The Simultaneous Recurrent Neural Network for Addressing the Scaling Problem in Static Optimization
A trainable recurrent neural network, Simultaneous Recurrent Neural network, is proposed to address the scaling problem faced by neural network algorithms in static optimization. The proposed algorithm derives its computational power to address the scaling problem through its ability to "learn" compared to existing recurrent neural algorithms, which are not trainable. Recurrent backpropagation ...
Training Simultaneous Recurrent Neural Network with Resilient Propagation for Static Optimization
This paper proposes a non-recurrent training algorithm, resilient propagation, for the Simultaneous Recurrent Neural network operating in relaxation mode for computing high-quality solutions of static optimization problems. Implementation details related to adaptation of the recurrent neural network weights through the non-recurrent training algorithm, resilient backpropagation, are formulated ... (a minimal sketch of such an update appears after this list).
A Recurrent Neural Network Model for solving CCR Model in Data Envelopment Analysis
In this paper, we present a recurrent neural network model for solving CCR Model in Data Envelopment Analysis (DEA). The proposed neural network model is derived from an unconstrained minimization problem. In the theoretical aspect, it is shown that the proposed neural network is stable in the sense of Lyapunov and globally convergent to the optimal solution of CCR model. The proposed model has...
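The resilient-propagation weight adaptation mentioned in the entry "Training Simultaneous Recurrent Neural Network with Resilient Propagation for Static Optimization" above can be sketched as follows. This is a minimal illustration of an RPROP-style per-weight update (the iRprop- variant); the names rprop_update, prev_grad, and step, and the default constants, are assumptions for illustration and are not taken from that paper.

```python
import numpy as np

# Sketch of a resilient-propagation (iRprop-) update: each weight keeps its own
# step size, grown where the gradient kept its sign and shrunk where it flipped.
def rprop_update(W, grad, prev_grad, step,
                 eta_plus=1.2, eta_minus=0.5,
                 step_min=1e-6, step_max=50.0):
    sign_change = grad * prev_grad
    step = np.where(sign_change > 0, np.minimum(step * eta_plus, step_max), step)
    step = np.where(sign_change < 0, np.maximum(step * eta_minus, step_min), step)
    # on a sign change, skip the update for that weight this cycle (iRprop- rule)
    effective_grad = np.where(sign_change < 0, 0.0, grad)
    W = W - np.sign(effective_grad) * step
    return W, effective_grad, step   # effective_grad becomes prev_grad next cycle
```

In use, a gradient of the optimization energy with respect to the network weights would be computed once per relaxation cycle and passed to this update, with step and prev_grad carried over between cycles.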